
Support decomposition of torch.broadcast_tensors #4253


Open · vinitdeodhar wants to merge 12 commits into main

Conversation

vinitdeodhar (Contributor) commented on Jul 2, 2025:

Added support for the torch.broadcast_tensors op and its decomposition into a sequence of aten.broadcast_to ops.

closes #4240
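
For context, the decomposition first computes the common broadcast shape by aligning all input shapes from the rightmost dimension, then expands each input to that shape with a single aten.broadcast_to. Below is a minimal standalone C++ sketch of that shape computation; broadcastShapes and the plain std::vector types are illustrative, not the PR's actual helper:

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    // Align shapes from the rightmost dimension; an input whose rank is
    // smaller than the current position contributes an implicit size-1 dim.
    std::vector<int64_t>
    broadcastShapes(const std::vector<std::vector<int64_t>> &shapes) {
      size_t maxRank = 0;
      for (const auto &shape : shapes)
        maxRank = std::max(maxRank, shape.size());

      std::vector<int64_t> result(maxRank, 1);
      for (size_t i = 0; i < maxRank; ++i) { // i counts from the right
        for (const auto &shape : shapes) {
          int64_t dim = static_cast<int64_t>(shape.size()) -
                        static_cast<int64_t>(i) - 1;
          if (dim < 0)
            continue; // rank too small: implicit 1, never changes the result
          if (shape[dim] != 1)
            result[maxRank - i - 1] = shape[dim]; // assumes compatible inputs
        }
      }
      return result;
    }

For example, inputs of shapes {3, 1} and {4} yield the common shape {3, 4}, and each tensor is then broadcast to that shape.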

sahas3 requested review from vivekkhandelwal1 and sahas3 on July 3, 2025
vinitdeodhar (Contributor, Author) commented:

@sahas3, @vivekkhandelwal1, can you please review? The failing checks are unrelated to this change, and similar failures are seen on other recent PRs.

vinitdeodhar (Contributor, Author) commented:

Hi @vivekkhandelwal1, can you please provide your feedback?

vinitdeodhar requested a review from sahas3 on July 18, 2025
vinitdeodhar (Contributor, Author) commented:

@zjgarvey, I am not sure who the code owner is who could review this change. Can you please provide feedback on this if possible?

zjgarvey self-requested a review on August 7, 2025
zjgarvey (Collaborator) left a review:

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Thanks! A few things to address before approval (most importantly the potentially wrong check on shape index).

    bool unknownSize = false;
    for (auto [idx, shape] : llvm::enumerate(shapes)) {
      if (ranks[idx] - i - 1 < shape.size() &&
          shape[ranks[idx] - i - 1] == kUnknownSize) {
zjgarvey (Collaborator) commented on this snippet:

This is a bit confusing. 0 <= i < maxRank, so it seems redundant to check ranks[idx] - i - 1 < ranks[idx]. Perhaps the check should be ranks[idx] - i - 1 >= 0?

vinitdeodhar (Contributor, Author) replied:

@zjgarvey, this check ensures that an out-of-bounds access to shape is not performed when the input tensors have different ranks. The e2e test BroadcastTensorsModuleList_multiple_ranks covers this case. Let me know if you have a suggestion for making this check easier to read.

zjgarvey (Collaborator) replied:

Maybe "confusing" isn't the correct word. I'm rather certain the check is wrong.

E.g., for ranks = {1, 2} and maxRank = 2, when idx = 0 and i = 1, ranks[idx] - i - 1 = 1 - 1 - 1 = -1, which is then used to index shape for the tensor at idx = 0.

Whereas ranks[idx] - i - 1 < ranks[idx] holds automatically, since i >= 0.
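
A minimal sketch of the check with the suggested fix applied, assuming the enclosing loop runs 0 <= i < maxRank from the innermost dimension outward, that ranks[idx] == shapes[idx].size(), and that the subtraction is done in a signed type:

    bool unknownSize = false;
    for (auto [idx, shape] : llvm::enumerate(shapes)) {
      // Dimension of this input that lines up with broadcast position i,
      // counting from the right; negative means this input's rank is too
      // small, so the dimension is an implicit 1 and never unknown.
      int64_t dim =
          static_cast<int64_t>(ranks[idx]) - static_cast<int64_t>(i) - 1;
      if (dim >= 0 && shape[dim] == kUnknownSize)
        unknownSize = true;
    }

Whether the original < shape.size() comparison accidentally guards against this case depends on signedness (a negative signed value compared against an unsigned size_t wraps to a large number), so an explicit dim >= 0 check states the intent directly.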

Successfully merging this pull request may close these issues.

Support for torch.broadcast_tensors in linalg backend